1 - 31. What did we learn in AI 1/2? (Part 1) [ID:30401]

I would like to remind you of what we did over the last year, because for all the technical detail we've had, especially in the last weeks, that may have blocked your view of what we've achieved. You can also think of it as reviewing your whole life: when you're jumping down, your life supposedly flashes by you. So this is the end of AI, and we're going to flash your life, our life, by you.

Okay, so what we did last semester is what is often called symbolic AI. The main assumption here is that we can do AI by representing the environment of the agents, their actions and consequences, and all those kinds of things in formal languages, and that instead of directly acting in the world we can simulate and predict what our actions would do by manipulating these formal languages. What comes out of this idea is essentially search-based algorithms.

You start in a state; that works in case you have a fully observable environment. Otherwise, instead of operating on a state, you operate on a belief state, where you entertain all the possibilities of which state you might be in. But essentially that's the same: either you have a state or you have a belief state.
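As a minimal sketch of the belief-state idea (the toy positions, the "right" action, and the transition model below are made up for illustration, not from the lecture): a belief state can be represented as the set of states the agent might be in, and predicting an action means applying it to every member of that set.

```python
# Minimal sketch: a belief state as the set of states the agent might be in.
# The positions, the "right" action, and the transition model are made up.

def predict(belief_state, action, transition):
    """Predict the belief state after doing `action` in any possible state.

    `transition(state, action)` returns the set of possible successors
    (more than one if the action is nondeterministic).
    """
    return {s2 for s in belief_state for s2 in transition(s, action)}

# Toy model: positions 0..3; moving right sometimes slips and stays put.
def slippery_right(state, action):
    if action == "right":
        return {state, min(state + 1, 3)}
    return {state}

belief = {0, 1}                               # we might be at 0 or at 1
belief = predict(belief, "right", slippery_right)
print(sorted(belief))                         # [0, 1, 2]
```

The point is that the same prediction machinery works whether the set has one element (a known state) or many (a belief state).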

And then, in the agent's agent function, you try to predict what your actions will do. Depending on what languages you give yourself to manipulate the internal knowledge, the internal state, you have different ways of either doing search or doing inference. The big difference between doing search and doing inference is which objects you actually operate on. When you're doing search, you're directly talking about the states: you're using states, manipulating states, simulating your way through the state space. When you're doing inference, you're using a language to describe the state space, and instead of directly manipulating the objects of the state space, the states themselves, you manipulate descriptions. Logic, in a way, is the culmination of this: you're not handling states at all, you're handling formulae that describe state spaces. And the idea there is that inference can be much more efficient.
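On the search side of that contrast, a minimal sketch (the four-node state space is a made-up toy): breadth-first search manipulates states directly, enumerating its way through the state space one successor at a time.

```python
from collections import deque

# Minimal sketch of search operating directly on states:
# breadth-first search over an explicit (made-up) state space.
successors = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": [],
}

def bfs(start, is_goal):
    frontier = deque([[start]])          # paths, so we can return a solution
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if is_goal(path[-1]):
            return path
        for nxt in successors[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

print(bfs("A", lambda s: s == "D"))      # ['A', 'B', 'D']
```

Note that the algorithm never describes states; it only stores and expands them, which is exactly the contrast to inference drawn above.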

Why? Because instead of just enumerating the states, which may be many, possibly infinitely many, you only have a few descriptions, at most countably many, but in practice far fewer. And that's something we do all the time. Think about talking about the natural numbers. You can either just write them down and go on and on and on, or you can say: well, what do we have here? We have a zero, we have a successor, and we know a couple of things about them, five things to be exact. Instead of having this infinitely big set, you have this very small description, and then you can do infinitely much work by manipulating the descriptions instead of doing something to the infinite set itself, which would obviously take infinitely long. And that's the difference between search and inference.

Of course, you can always bootstrap your way back into search, and you do: if, instead of the set of natural numbers in this example, you take the infinite set of expressions, then you can do search again on this level. And you do. If you think about resolution proofs or tableau proofs and those kinds of things, you are doing search, but you're doing it one level up, just like in reinforcement learning, where we're doing the MDP stuff one level up. We see this all the time in AI: we have methods, we add things like inference, and then we use the same methods we already have, one level up. And we've played this through in a succession of languages, working our way from direct state-space search to fully description-based formats, which we call logics. And even there, we've looked at at least two different logics, propositional logic and first-order logic.

There are more logics, with different properties. So that was basically what we did last semester. We also had a little excursion into planning, and planning is essentially adding actions and time to logic. What we did in STRIPS planning was to take propositional logic and add preconditions, add lists, and delete lists, which is essentially a very simple form of time, something we did later when we went to Markov decision processes as well.
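As a minimal sketch of that idea (the pickup action and the fact names are a made-up toy domain, not the lecture's example): a STRIPS action consists of a precondition list, an add list, and a delete list, and applying it advances the state by one step, which is the very simple form of time mentioned above.

```python
from dataclasses import dataclass

# Minimal sketch: a STRIPS action as preconditions + add list + delete list.
# A state is just the set of ground facts that currently hold.

@dataclass
class Action:
    name: str
    pre: frozenset      # facts that must hold to apply the action
    add: frozenset      # facts that become true
    delete: frozenset   # facts that stop being true

def apply(state, action):
    if not action.pre <= state:
        raise ValueError(f"{action.name}: preconditions not satisfied")
    return (state - action.delete) | action.add   # one "time step"

# Made-up toy domain: a robot picking up a block.
pickup = Action(
    name="pickup(block)",
    pre=frozenset({"hand_empty", "on_table(block)"}),
    add=frozenset({"holding(block)"}),
    delete=frozenset({"hand_empty", "on_table(block)"}),
)

state = frozenset({"hand_empty", "on_table(block)"})
state = apply(state, pickup)
print(state)   # frozenset({'holding(block)'})
```

The same pattern, a transition from one state to the next, is what MDPs later make probabilistic.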

Two things here. We have AI with symbols, which means, and that's a very important point that is different this semester, that we're assuming what is often called the physical symbol system hypothesis, namely that it's enough to describe the world in terms of language.

Part of a chapter:
Chapter 31. What did we learn in AI 1/2?

Accessible via: Open access
Duration: 00:27:11 min
Recording date: 2021-03-30
Uploaded on: 2021-03-31 08:36:30
Language: en-US

A short recap of AI-1 with a more detailed look at the different agent types.
